Intrinsically stable IIR filters and IIR-MLP neural networks for signal processing
Abstract
The capabilities of Locally Recurrent Neural Networks (LRNNs) in performing on-line Signal Processing (SP) tasks are well known [1,3,5,6,10,11,14,15]. In particular, one of the most popular architectures is the Multi-Layer Perceptron (MLP) with linear IIR temporal filter synapses (IIR-MLP) [3,5,10,11,14,15]. The IIR-MLP is theoretically motivated as a non-linear generalization of linear adaptive IIR filters [13] and as a generalization of the popular Time Delay Neural Networks (TDNNs) [1,2,4]. In fact, a TDNN can be viewed as an MLP with FIR temporal filter synapses (FIR-MLP) [2,3,5]; the IIR-MLP therefore generalizes the FIR-MLP (or TDNN) by allowing the temporal filters to have a recursive part.

Efficient training algorithms can be developed for general LRNNs, and thus for the IIR-MLP [10,11,14,15]. They are based on Back Propagation Through Time of the error [2] to propagate the sensitivities through time and network layers, and on a local recursive computation of the output error (RPE type) [9,13]. They are named Causal Recursive Back Propagation (CRBP) [10,11,15] and Truncated Recursive Back Propagation (TRBP) [14], and they differ in the technique used to allow on-line computation. Both are on-line and local in space and in time, i.e. easy to implement, and their complexity is limited and affordable. They generalize the Back-Tsoi algorithm [3], the algorithms in [4,6], the one by Wan [2], and standard Back Propagation.

Although CRBP and TRBP perform quite stably if the learning rate is chosen small enough by the user, they do not control the stability of the IIR synapses (for the IIR-MLP) or of the recursive filters (for general LRNNs). In the following we refer mostly to the IIR-MLP case, but the extension to other LRNNs is possible, and easy in most cases. The same limitation is found throughout the literature on learning methods for RNNs and LRNNs, e.g. [2,3,6,9]. The problem with general RNNs is that it is not even easy to derive necessary and sufficient conditions on the network coefficients that assure asymptotic stability, even in the time-invariant case, since the feedback loop includes the non-linearity. For LRNNs, on the other hand, the recursion is usually separated from the non-linearity, as in the IIR-MLP. Therefore, in batch mode the overall IIR-MLP is asymptotically stable if and only if each of its IIR filters is asymptotically stable, i.e. all transfer-function poles have modulus less than one.
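As an illustrative sketch only (this code is not from the paper, and the class and method names are hypothetical), the difference equation of an IIR synapse and the pole condition above can be written down directly: a synapse is asymptotically stable iff every root of its denominator polynomial lies strictly inside the unit circle, and in batch mode the IIR-MLP is stable iff this check passes for every synapse.

```python
import numpy as np

class IIRSynapse:
    """One IIR temporal-filter synapse:
    y[n] = sum_k b[k] x[n-k] + sum_k a[k] y[n-k].
    Minimal sketch; names and interface are hypothetical."""

    def __init__(self, b, a):
        self.b = np.asarray(b, dtype=float)  # moving-average taps b[0..M]
        self.a = np.asarray(a, dtype=float)  # recursive taps a[1..N]
        self.x_hist = np.zeros(len(self.b))  # x[n], x[n-1], ..., x[n-M]
        self.y_hist = np.zeros(len(self.a))  # y[n-1], ..., y[n-N]

    def step(self, x_n):
        """Advance the filter by one sample and return y[n]."""
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x_n
        y_n = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y_n
        return y_n

    def is_stable(self):
        """Stable iff all poles of
        H(z) = B(z) / (1 - a_1 z^-1 - ... - a_N z^-N)
        have modulus strictly less than one."""
        poles = np.roots(np.concatenate(([1.0], -self.a)))
        return bool(np.all(np.abs(poles) < 1.0))

# In batch mode the whole IIR-MLP is asymptotically stable
# if and only if every synapse passes this check:
# all(s.is_stable() for s in synapses)
```

An intrinsically stable training scheme could exploit such a check, e.g. by projecting or re-parameterizing the recursive coefficients so that the poles stay inside the unit circle after every update; the sketch above shows only the test itself, not the paper's stabilization method.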
Similar papers
Causal Back Propagation Through Time for Locally Recurrent Neural Networks
This paper concerns dynamic neural networks for signal processing: architectural issues are considered but the paper focuses on learning algorithms that work on-line. Locally recurrent neural networks, namely MLP with IIR synapses and generalization of Local Feedback MultiLayered Networks (LF MLN), are compared to more traditional neural networks, i.e. static MLP with input and/or output buffer...
Fast adaptive IIR-MLP neural networks for signal processing applications
Neural networks with internal temporal dynamics can be applied to non-linear DSP problems. The classical fully connected recurrent architectures can be replaced by less complex neural networks based on the well-known Multi-Layer Perceptron (MLP), where the temporal dynamics are modelled by replacing each synapse either with a FIR filter or with an IIR filter. A general learning algorithm (Back-Pr...
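The replacement of each scalar synapse by a small linear filter, as described in the blurb above, is easy to sketch for the batch, time-invariant case. The layer function below is a hypothetical illustration (names and array shapes are assumptions, not the paper's code): each synapse i -> j filters the i-th input sequence, and the neuron applies a static non-linearity to the summed filter outputs.

```python
import numpy as np
from scipy.signal import lfilter

def iir_mlp_layer(X, B, A, bias):
    """One IIR-MLP layer over a whole sequence (batch mode, sketch).
    X       : (T, n_in) array of input sequences.
    B[j][i] : numerator taps of synapse i -> j.
    A[j][i] : denominator taps (A[j][i][0] == 1); A[j][i] == [1.0]
              gives a FIR synapse, i.e. the FIR-MLP / TDNN case.
    Returns a (T, n_out) array of output sequences."""
    T, n_in = X.shape
    n_out = len(B)
    S = np.zeros((T, n_out))
    for j in range(n_out):
        for i in range(n_in):
            # Each synapse is an ordinary linear filter on one input line.
            S[:, j] += lfilter(B[j][i], A[j][i], X[:, i])
        S[:, j] += bias[j]
    return np.tanh(S)  # static activation after the temporal filtering
```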
A new IIR-MLP learning algorithm for on-line signal processing
In this paper we propose a new learning algorithm for locally recurrent neural networks, called Truncated Recursive Back Propagation, which can be easily implemented on-line with good performance. Moreover, it generalises the algorithm proposed by Waibel et al. for TDNNs, and includes the Back and Tsoi algorithm, as well as BPS and standard on-line Back Propagation, as particular cases. The proposed...
Low latency IIR digital filter design by using metaheuristic optimization algorithms
Filters are a particularly important class of LTI systems. Digital filters have a great impact on modern signal processing due to their programmability, reusability, and capacity to reduce noise to a satisfactory level. Over the past few decades, IIR digital filter design has been an important research field. Design of an IIR digital filter with desired specifications leads to a non-convex optimization pr...
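Although the blurb is truncated, one standard way to couple such a metaheuristic search with the stability requirement discussed in the abstract above is a penalty on unstable candidates. The cost function below is a hypothetical sketch (function and parameter names are assumptions, not the cited paper's method): it rejects coefficient vectors with poles on or outside the unit circle and otherwise scores the magnitude-response error on a frequency grid.

```python
import numpy as np
from scipy.signal import freqz

def iir_design_cost(coeffs, M, w_grid, H_desired):
    """Fitness of a candidate IIR filter for a metaheuristic optimizer.
    coeffs   : concatenated [b_0..b_M, a_1..a_N].
    w_grid   : frequencies (rad/sample) at which the response is scored.
    H_desired: desired response on w_grid (magnitude is compared)."""
    b = coeffs[:M + 1]
    a = np.concatenate(([1.0], coeffs[M + 1:]))
    poles = np.roots(a)
    if np.any(np.abs(poles) >= 1.0):
        # Unstable candidate: large penalty keeps the search
        # inside the stable coefficient region.
        return 1e6 + float(np.max(np.abs(poles)))
    _, H = freqz(b, a, worN=w_grid)
    return float(np.mean((np.abs(H) - np.abs(H_desired)) ** 2))
```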